fix: include raw LLM response in JSON parse error messages#1090
Open
octo-patch wants to merge 1 commit into ItzCrazyKns:master from
Conversation
When parsing LLM responses fails with a JSON schema validation or parse error, the raw response content is now included in the error message. This makes it much easier to debug malformed outputs from models that don't strictly follow the expected JSON format (fixes ItzCrazyKns#997).
Fixes #997
Problem
When LLM responses fail JSON parsing or schema validation, the error message only shows the parse error but not the raw response from the model. This makes it nearly impossible to debug malformed outputs without modifying the underlying model server code to capture responses separately.
Solution
Include the raw response content in the error message for the `generateObject` and `streamObject` methods in the OpenAI and Ollama providers (Groq inherits from OpenAI, so it is covered too).

Before:
After:
Testing
Manually verified the error message format change in both providers. No behavioral change — only the error message content is extended.
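The pattern described above can be sketched as follows. This is an illustrative sketch only, not the PR's actual diff; the function name `parseModelJson` and the exact message format are assumptions for demonstration:

```typescript
// Hypothetical helper showing the error-message change: when JSON.parse
// fails, rethrow with the raw model output appended so malformed
// responses can be inspected directly from the error.
function parseModelJson(raw: string): unknown {
  try {
    return JSON.parse(raw);
  } catch (err) {
    const cause = err instanceof Error ? err.message : String(err);
    throw new Error(
      `Failed to parse LLM response as JSON: ${cause}\nRaw response: ${raw}`,
    );
  }
}
```

With this shape, a response like `not json` produces an error whose message contains both the parser's complaint and the literal text the model returned, instead of only the parse error.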
Summary by cubic
Adds the raw LLM response to JSON parse and schema validation error messages to make debugging malformed outputs easier. Applies to OpenAI and Ollama providers; Groq inherits the change. Fixes #997.
- Includes the raw content in `generateObject` and the chunk text in `streamObject` when parsing fails.
- Includes the raw content in `generateObject` when parsing fails.

Written for commit bcee37a. Summary will update on new commits.